Kernel machines have sustained continuous progress in the field of quantum chemistry. In particular, they have proven successful in the low-data regime of force field reconstruction, because many physical invariances and symmetries can be incorporated into the kernel function, compensating for the absence of much larger datasets. So far, however, the scalability of this approach has been hindered by its cubic runtime in the number of training points. While it is known that iterative Krylov subspace solvers can overcome this burden, they crucially rely on effective preconditioners, which are elusive in practice. Practical preconditioners need to be computationally efficient and numerically robust at the same time. Here, we consider the broad class of Nyström-type methods to construct preconditioners based on successively more sophisticated low-rank approximations of the original kernel matrix, each of which provides a different set of computational trade-offs. All considered methods estimate the relevant subspace spanned by the kernel matrix columns using different strategies to identify a representative set of inducing points. Our comprehensive study covers the full spectrum of approaches, from naive random sampling to leverage score estimates and incomplete Cholesky factorizations, up to exact singular value decompositions.
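As a rough illustration of the idea (a minimal NumPy/SciPy sketch, not the implementation studied above), the following builds a Nyström preconditioner from randomly sampled inducing points and uses it inside a preconditioned conjugate-gradient solve of a kernel ridge system; the RBF kernel, regularization value, and problem sizes are placeholder assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

def rbf_kernel(X, Y, gamma=0.5):
    # Squared Euclidean distances, then the Gaussian/RBF kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # placeholder training inputs
y = rng.normal(size=1000)                 # placeholder training targets
lam = 1e-2                                # ridge / noise regularization

K = rbf_kernel(X, X)                      # full kernel matrix (n x n)
A = K + lam * np.eye(len(X))              # linear system to solve: A alpha = y

# Nystrom approximation built from m randomly sampled inducing points.
m = 100
idx = rng.choice(len(X), size=m, replace=False)
K_nm = K[:, idx]                          # n x m
K_mm = K[np.ix_(idx, idx)]                # m x m

# Preconditioner P = K_nm K_mm^{-1} K_mn + lam I, applied via the Woodbury identity.
inner = cho_factor(lam * K_mm + K_nm.T @ K_nm)
def apply_P_inv(v):
    return (v - K_nm @ cho_solve(inner, K_nm.T @ v)) / lam

M = LinearOperator(A.shape, matvec=apply_P_inv)
alpha, info = cg(A, y, M=M)               # preconditioned conjugate gradient
```

Swapping the random selection of `idx` for leverage-score sampling, an incomplete Cholesky pivoting rule, or an SVD-based factor changes only how the low-rank approximation is obtained; the Woodbury-based application of the preconditioner stays the same.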
Reconstructing force fields (FFs) from atomistic simulation data is a challenge, since accurate data can be highly expensive. Here, machine learning (ML) models can help to be data-economical, as they can be successfully constrained using the underlying symmetries and conservation laws of physics. However, so far, every newly proposed descriptor for an ML model has required cumbersome and mathematically tedious remodeling. We therefore propose to use modern techniques from algorithmic differentiation within the ML modeling process, effectively enabling the automatic use of novel descriptors or models at higher orders of computational efficiency. This paradigmatic approach not only enables the versatile usage of novel representations and their efficient computation (both of high value to the FF community), but also allows the straightforward inclusion of further physical knowledge, such as higher-order information (e.g., Hessians, more complex partial differential equation constraints, etc.), even beyond the presented FF domain.
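As a hedged, toy illustration of what algorithmic differentiation buys in this setting (not the framework proposed above): once an energy model is written on top of any descriptor, forces and higher-order quantities such as Hessians follow automatically from the autodiff engine, with no hand-derived gradients of the descriptor. The descriptor, weights, and system size below are placeholders.

```python
import torch

# Toy descriptor: inverse pairwise distances (a stand-in for any newly proposed descriptor).
def descriptor(R):
    diff = R[:, None, :] - R[None, :, :]
    dist = (diff.pow(2).sum(-1) + torch.eye(R.shape[0])).sqrt()  # eye avoids sqrt(0) on the diagonal
    iu = torch.triu_indices(R.shape[0], R.shape[0], offset=1)
    return 1.0 / dist[iu[0], iu[1]]

# Toy "learned" energy model on top of the descriptor (4 atoms -> 6 pair features).
w = torch.randn(6)
def energy(R):
    return descriptor(R) @ w

R = torch.randn(4, 3, requires_grad=True)                  # atomic positions
E = energy(R)
F = -torch.autograd.grad(E, R)[0]                          # forces = -dE/dR, obtained automatically
H = torch.autograd.functional.hessian(energy, R.detach())  # higher-order info (Hessian), also automatic
```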
The journal impact factor (JIF) is often equated with journal quality and with the quality of peer review of papers submitted to that journal. We examined the association between the content of peer review and JIF by analysing 10,000 peer-review reports submitted to 1,644 medical and life-science journals. Two researchers hand-coded a random sample of 2,000 sentences. We then trained machine learning models to classify all 187,240 sentences as contributing or not contributing to content categories. We examined the association between the content of peer review and ten groups of journals defined by JIF deciles, using linear mixed-effects models and adjusting for review length. JIFs ranged from 0.21 to 74.70. Peer-review length increased from the lowest (median 185 words) to the highest JIF group (387 words). The proportion of sentences allocated to the different content categories varied widely, even within JIF groups. Regarding thoroughness, sentences on 'Materials and Methods' were more prevalent in the highest JIF journals than in the lowest JIF group (difference of 7.8 percentage points; 95% CI 4.9 to 10.7%). The trend for 'Presentation and Reporting' went in the opposite direction, with the highest JIF journals giving less emphasis to such content (difference -8.9%; 95% CI -11.3 to -6.5%). Regarding helpfulness, reviews for higher JIF journals devoted less attention to 'Suggestion and Solution' and provided fewer examples than lower impact factor journals. For the other content categories, differences were absent or very small. In conclusion, peer review in higher JIF journals tends to be more thorough in discussing the methods used, but less helpful in suggesting solutions and providing examples. Differences were modest and variability high, indicating that the JIF is a poor predictor of the quality of peer review of an individual manuscript.
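For readers unfamiliar with the modelling step, a minimal sketch of a linear mixed-effects analysis of this kind might look as follows; the file name and column names are illustrative placeholders, not the study's actual data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per review, with the share of sentences in a content
# category, the journal's JIF decile, the review length, and a journal identifier.
df = pd.read_csv("reviews.csv")

model = smf.mixedlm(
    "share_methods ~ C(jif_decile) + review_length",  # fixed effects
    data=df,
    groups=df["journal_id"],                          # random intercept per journal
)
print(model.fit().summary())
```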
Machine learning (ML) is increasingly used to inform high-stakes decisions. As complex ML models (e.g., deep neural networks) are often considered black boxes, a wealth of procedures has been developed to shed light on their inner workings and the ways in which their predictions come about, defining the field of 'explainable AI' (XAI). Saliency methods rank input features according to some measure of 'importance'. Since formal definitions of feature importance are lacking, it is difficult to validate these methods. It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables). To avoid misinterpretations due to such behavior, we propose the actual presence of such an association as a necessary condition and objective preliminary definition of feature importance. We carefully craft a ground-truth dataset in which all statistical dependencies are well-defined and linear, serving as a benchmark to study the problem of suppressor variables. We evaluate common explanation methods against our objective definition, including LRP, DTD, PatternNet, PatternAttribution, LIME, Anchors, SHAP, and permutation-based methods. We show that most of these methods are unable to distinguish important features from suppressors in this setting.
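To make the suppressor phenomenon concrete, here is a minimal two-feature construction (a generic illustration, not the benchmark dataset described above): a feature can receive a large model weight while being statistically unrelated to the target.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)             # signal
d = rng.normal(size=n)             # distractor shared by the features

x1 = z + d                         # informative feature (signal corrupted by the distractor)
x2 = d                             # suppressor: helps the model cancel the distractor ...
y = z                              # ... but has no statistical association with the target

X = np.column_stack([x1, x2])
print(LinearRegression().fit(X, y).coef_)  # approx [ 1, -1]: the suppressor gets a large weight
print(np.corrcoef(x2, y)[0, 1])            # approx 0: x2 is unrelated to y
```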
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models. Examples include the M/EEG inverse problem, neural encoding models for task-based fMRI analyses, and temperature monitoring of the climate or of CPUs and GPUs. In these domains, both the model parameters to be inferred and the measurement noise can exhibit complex spatio-temporal structure. Existing work either neglects the temporal structure or leads to computationally demanding inference schemes. Overcoming these limitations, we devise a novel flexible hierarchical Bayesian framework in which the spatio-temporal dynamics of model parameters and noise are modeled with a Kronecker product covariance structure. Inference in our framework is based on majorization-minimization optimization and has guaranteed convergence properties. Our efficient algorithms exploit the intrinsic Riemannian geometry of temporal autocovariance matrices. For stationary dynamics described by Toeplitz matrices, the theory of circulant embeddings is employed. We prove convex bounding properties and derive update rules for the resulting algorithm. On both synthetic and real neural data from M/EEG, we demonstrate that our method leads to improved performance.
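As a hedged illustration of why the Kronecker product structure matters computationally (a generic NumPy identity check, not the algorithm above): a space-time covariance kron(S, T) never needs to be formed explicitly, because matrix-vector products factor through the much smaller spatial and temporal blocks.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
S = np.cov(rng.normal(size=(50, 5)), rowvar=False) + 0.1 * np.eye(5)  # 5 x 5 spatial covariance
T = toeplitz(0.8 ** np.arange(100))                                   # 100 x 100 stationary (Toeplitz) temporal covariance

# The full space-time covariance kron(S, T) is 500 x 500, but it never has to be formed:
# (S kron T) vec(X) = vec(T X S^T) for X of shape (n_times, n_space), vec in column-major order.
X = rng.normal(size=(100, 5))
full = np.kron(S, T) @ X.reshape(-1, order="F")
fast = (T @ X @ S.T).reshape(-1, order="F")
print(np.allclose(full, fast))   # True
```

For stationary Toeplitz temporal covariances, the same products can additionally be computed via FFT-based circulant embeddings, which is what makes the temporal dimension cheap.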
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods prove to be practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
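The following is a loose, simplified sketch of the contrast with PCA, not the PRCA/DRSA objectives as defined in the paper; the "context" vectors below are a placeholder for whatever relevance signal is propagated to the layer under study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 16
A = rng.normal(size=(n, d))   # layer activations for n samples
C = rng.normal(size=(n, d))   # per-sample "context" vectors standing in for the relevance signal
                              # reaching this layer (placeholder, not the paper's definition)

# PCA: leading eigenvectors of the activation covariance (directions of maximal variance).
pca_dirs = np.linalg.eigh(A.T @ A / n)[1][:, ::-1][:, :2]

# Relevance-oriented variant: leading eigenvectors of the symmetrized activation/context
# cross-covariance, so the retained subspace maximizes relevance of the projections
# rather than their variance.
cov_rel = (A.T @ C + C.T @ A) / (2 * n)
rel_dirs = np.linalg.eigh(cov_rel)[1][:, ::-1][:, :2]
```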
Diversity Searcher is a tool originally developed to help analyse diversity in news media texts. It relies on a form of automated content analysis and thus rests on prior assumptions and depends on certain design choices related to diversity and fairness. One such design choice is the external knowledge source(s) used. In this article, we discuss implications that these sources can have on the results of content analysis. We compare two data sources that Diversity Searcher has worked with - DBpedia and Wikidata - with respect to their ontological coverage and diversity, and describe implications for the resulting analyses of text corpora. We describe a case study of the relative over- or under-representation of Belgian political parties between 1990 and 2020 in the English-language DBpedia, the Dutch-language DBpedia, and Wikidata, and highlight the many decisions needed with regard to the design of this data analysis and the assumptions behind it, as well as implications from the results. In particular, we came across a staggering over-representation of the political right in the English-language DBpedia.
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models, which are unavoidably opaque and perceived as black-box methods, may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Moreover, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements -- especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions with transparency. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be directly applied to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and comprehensively overview model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for automatic differentiation in a fully automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset, employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
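Below is a generic, software-only PyTorch sketch of surrogate-gradient training of a spiking layer; it illustrates the modelling style but is not the hxtorch.snn API, and it omits the hardware-in-the-loop path and the hardware membrane observations.

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

spike = SuperSpike.apply

def lif_layer(inp, w, tau=0.9, v_th=1.0):
    """Leaky integrate-and-fire layer unrolled over time; inp has shape (T, batch, n_in)."""
    v = torch.zeros(inp.shape[1], w.shape[1])
    out = []
    for x_t in inp:
        v = tau * v + x_t @ w          # leaky integration of the input current
        s = spike(v - v_th)            # threshold crossing emits a spike
        v = v * (1.0 - s.detach())     # reset membrane on spike
        out.append(s)
    return torch.stack(out)
```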
We present an automatic method for annotating images of indoor scenes with the CAD models of the objects by relying on RGB-D scans. Through a visual evaluation by 3D experts, we show that our method retrieves annotations that are at least as accurate as manual annotations, and can thus be used as ground truth without the burden of manually annotating 3D data. We do this using an analysis-by-synthesis approach, which compares renderings of the CAD models with the captured scene. We introduce a 'cloning procedure' that identifies objects that have the same geometry, to annotate these objects with the same CAD models. This allows us to obtain complete annotations for the ScanNet dataset and the recent ARKitScenes dataset.
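As a hedged sketch of the analysis-by-synthesis comparison (a simplified stand-in, not the paper's actual scoring function): each candidate CAD model and pose would be rendered into the scan's camera and scored against the captured depth, keeping the best-scoring candidate as the annotation.

```python
import numpy as np

def fit_score(rendered_depth, captured_depth, valid_mask, tol=0.02):
    """Fraction of valid pixels where a rendered CAD candidate agrees with the scanned
    depth within a tolerance (in metres)."""
    agree = (np.abs(rendered_depth - captured_depth) < tol) & valid_mask
    return agree.sum() / max(valid_mask.sum(), 1)
```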